Cloth animation is an important area of computer graphics due to its numerous applications. However, fast-moving cloth with multiple wrinkles has so far been difficult to animate because of the cloth clump problem. Cloth clumps are frozen areas where cloth pieces cluster unnaturally, an obstacle to realistic cloth animation. Hence we present a novel cloth collision resolution algorithm that prevents clump formation during fast cloth motions. The goal of our resolution algorithm is to make cloth move swiftly without forming any unnatural frozen cloth clumps, while preventing all cloth-cloth and rigid-cloth penetrations at every moment of the simulation. The non-penetration status of the cloth is maintained without the formation of cloth clumps, regardless of the speed of the cloth motion. Our algorithm is based on a particular order that we found in the resolution of cloth collisions, and it can be used with any structural modeling approach, such as spring-mass systems or finite elements. This paper includes several realistic, clump-free simulation examples involving fast motions.
In this paper the testability of modified-Booth array multipliers for standard-cell-based design environments is examined for the first time. In such cases the structure of the cells may be unknown, so the Cell Fault Model (CFM) is adopted. Two C-testable designs are proposed. The first is a design for an Nx × Ny-bit modified-Booth multiplier that uses ripple-carry addition at the last stage of the multiplication; it requires the addition of only one extra primary input and 38 test vectors with respect to the CFM. A second C-testable design uses carry-lookahead addition at the last stage, which is the case in practical implementations of modified-Booth multipliers. Such a C-testable design using carry-lookahead addition is proposed here for the first time in the open literature. This second design requires the addition of 4 extra primary inputs. One-level and two-level carry-lookahead adders are considered, for which the C-testable design requires 61 and 73 test vectors, respectively. The hardware and delay overheads imposed by both C-testable designs are very small and decrease as the size of the multiplier increases.
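For background, the radix-4 (modified Booth) recoding that underlies these multipliers can be sketched in a few lines. This is only the standard recoding, not the paper's C-testable design or test-generation scheme, and the function names are illustrative:

```python
def modified_booth_digits(x, n):
    """Radix-4 (modified Booth) recoding of an n-bit two's-complement value x.

    Bits are scanned in overlapping triplets (b[2i+1], b[2i], b[2i-1]) with an
    implicit b[-1] = 0; each triplet yields one digit in {-2, -1, 0, 1, 2}.
    Requires n even, so the recoding produces exactly n/2 digits.
    """
    assert n % 2 == 0
    b = x & ((1 << n) - 1)  # interpret the low n bits as two's complement
    digits, prev = [], 0    # prev is the implicit bit right of the current pair
    for i in range(0, n, 2):
        b0 = (b >> i) & 1
        b1 = (b >> (i + 1)) & 1
        digits.append(-2 * b1 + b0 + prev)
        prev = b1
    return digits

def modified_booth_product(a, x, n):
    """Multiply a by the n-bit two's-complement value x via its Booth digits.

    Each digit weighs 4**i, so only shift/negate multiples of a are needed,
    which is why the recoding halves the number of partial products.
    """
    return sum(d * a * (4 ** i) for i, d in enumerate(modified_booth_digits(x, n)))
```

For example, `modified_booth_digits(6, 4)` yields `[-2, 2]`, i.e. 6 = -2 + 2·4, so the multiplier contributes only two partial products instead of four.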
The reformation of biomass-derived ethanol to a hydrogen-rich gas stream suitable for feeding fuel cells is investigated as an efficient and environmentally friendly process for the production of electricity for mobile and stationary applications. Steam reforming of ethanol is investigated over Ni catalysts supported on La2O3, Al2O3, YSZ and MgO. The influence of several parameters on the catalytic activity and selectivity is examined, including reaction temperature, water-to-ethanol ratio and space velocity. Results reveal that the Ni/La2O3 catalyst exhibits high activity and selectivity toward hydrogen production and, most importantly, long-term stability for steam reforming of ethanol. The enhanced stability of this catalyst may be due to the scavenging of coke deposits from the Ni surface by lanthanum oxycarbonate species, which exist on top of the Ni particles under reaction conditions.
In data stream environments, the initial plan of a long-running query may gradually become inefficient due to changes in the data characteristics. In this case, the query optimizer generates a more efficient plan based on the current statistics. The online transition from the old to the new plan is called dynamic plan migration. In addition to correctness, an effective technique for dynamic plan migration should achieve the following objectives: 1) minimize the memory and CPU overhead of the migration, 2) reduce the duration of the transition, and 3) maintain a steady output rate. The only known solutions to this problem are the moving states (MS) and parallel track (PT) strategies, which have serious shortcomings with respect to the above objectives. Motivated by these shortcomings, we first propose HybMig, which combines the merits of MS and PT and outperforms both in every respect. As a second step, we extend PT, MS, and HybMig to the general migration problem, where both the new and the old plans are treated as black boxes.
To improve the mechanical properties and performance of water-atomized powder metallurgy steels, it is necessary to enhance the density. Consolidating water-atomized steel powders via conventional pressing and sintering to a relative density above 95 pct involves processing challenges. Consolidation of gas-atomized powders to full density by hot isostatic pressing (HIP) is an established process route, but using water-atomized powders in HIP involves challenges: their higher oxygen content results in the formation of prior particle boundaries. In this study, the effects of density and processing conditions on the oxide transformations and mechanical properties are evaluated for both conventional press-and-sinter processing and HIP. To this end, a water-atomized Cr–Mo-alloyed powder is consolidated to density levels between 6.8 and 7.3 g cm−3 by conventional die pressing and sintering. Fully dense material produced through HIP is evaluated not only for mechanical properties but also by microstructural and fractographic analysis. An empirical power-law model is fitted to the sintered material properties to estimate and predict the properties up to full density at different sintering conditions. A model describing the mechanism of oxide transformation during sintering and HIP is proposed. The challenges involved in the HIP of water-atomized powder are addressed and the requirements for successful HIP processing are discussed.
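An empirical power-law model of the kind mentioned above, y = a·x^b, can be fitted by linear least squares in log-log space. The sketch below is a generic illustration, not the paper's model: the density/strength pairs and the assumed full-density value are invented for demonstration only.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression on
    (log x, log y). Assumes all x and y are positive."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # Slope of the log-log regression line is the exponent b.
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)  # intercept gives the prefactor a
    return a, b

# Hypothetical sintered density (g cm-3) vs. strength (MPa) pairs,
# for illustration only; not data from the study.
density = [6.8, 6.9, 7.0, 7.1, 7.2, 7.3]
strength = [410.0, 445.0, 480.0, 520.0, 560.0, 600.0]
a, b = fit_power_law(density, strength)

full_density = 7.8  # assumed full density of the steel, hypothetical value
predicted_strength = a * full_density ** b  # extrapolation to full density
```

Extrapolating a log-log fit beyond the measured density range is exactly the "estimate up to full density" step, so the predicted value inherits whatever error the power-law assumption carries.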
Saliva is easy to access, non-invasive and a useful source of information for the diagnosis of several inflammatory and immune-mediated diseases. Following the advent of genomic technologies and -omic research, studies based on saliva testing have increased rapidly, and the human salivary proteome has been partially characterized. As a proteomic protocol to analyze the whole saliva proteome is not currently available, the most common aim of proteomic analysis is to discriminate between physiological and pathological conditions. The salivary proteome was initially investigated in several diseases: oral squamous cell carcinoma and oral leukoplakia, chronic graft-versus-host disease, and Sjögren’s syndrome. In contrast, salivary proteomics studies in the dermatological field are still at an initial stage, so the aim of this review is to collect the best research evidence on the role of saliva proteomics analysis in immune-mediated skin diseases and to understand the direction of research in this field. The results of the PRISMA analysis reported herein suggest that human saliva analysis could provide significant data for the diagnosis and prognosis of several immune-mediated and inflammatory skin diseases in the near future.
This paper presents an efficient dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A non-trivial extension of Featherstone's O(n) recursive forward dynamics algorithm is derived that allows one or more constraints to be enforced on the animated figures. We demonstrate how the constraint force evaluation algorithm we have developed makes it possible to simulate collisions between articulated figures, to compute the results of impulsive forces, to enforce joint limits, to model closed kinematic loops, and to robustly control motion at interactive rates. Particular care has been taken to make the algorithm not only fast, but also easy to implement and use. To better illustrate how the constraint force evaluation algorithm works, we provide pseudocode for its major components. Additionally, we analyze its computational complexity, and finally we present examples demonstrating how our system has been used to generate interactive, physically correct complex motion with little user effort.
We present a model-based framework for incremental, adaptive object shape estimation and tracking in monocular image sequences. Parametric structure and motion estimation methods usually assume a fixed class of shape representation (splines, deformable superquadrics, etc.) that is initialized prior to tracking. Since the model shape coverage is fixed a priori, the incremental recovery of structure is decoupled from tracking, thereby limiting both processes in their scope and robustness. In this work, we describe a model-based framework that supports the incremental automatic detection and integration of low-level geometric primitives (lines). Such primitives are not explicitly captured in the initial model, but move consistently with its image motion. The consistency tests used to identify new structure are based on trinocular constraints between geometric primitives. The method not only increases the model scope, but also improves tracking accuracy by including the newly recovered features in its state estimation. The formulation is a step toward automatic model building, since it relaxes the assumptions both on the availability of a prior shape representation and on the number of features that would otherwise be necessary for entirely bottom-up reconstruction. We demonstrate the proposed approach on two separate image-based tracking domains, each involving complex 3D object structure and motion.
A reverse nearest neighbor (RNN) query returns the data objects that have a query point as their nearest neighbor (NN). Although such queries have been studied quite extensively in Euclidean spaces, there is no previous work in the context of large graphs. In this paper, we provide a fundamental lemma, which can be used to prune the search space while traversing the graph in search of RNNs. Based on this lemma, we develop two RNN methods: an eager algorithm that attempts to prune network nodes as soon as they are visited, and a lazy technique that prunes the search space when a data point is discovered. We study retrieval of an arbitrary number k of reverse nearest neighbors, investigate the benefits of materialization, cover several query types, and deal with cases where the queries and the data objects reside on nodes or edges of the graph. The proposed techniques are evaluated in various practical scenarios involving spatial maps, computer networks, and the DBLP coauthorship graph.
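As a baseline for the definition above (not the paper's eager or lazy pruning algorithms), an RNN query on a weighted graph can be answered by brute force: compute network distances from each data object and keep those whose nearest neighbor is the query node. The graph representation and names below are illustrative:

```python
import heapq

def dijkstra(graph, src):
    """Shortest network distances from src over a weighted adjacency dict
    mapping node -> {neighbor: edge_weight}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def rnn(graph, data_points, q):
    """Brute-force RNN: the data objects whose nearest neighbor, among the
    other data objects and the query node q, is q itself. Distance ties are
    resolved in favor of q."""
    result = []
    for p in data_points:
        dist = dijkstra(graph, p)
        d_q = dist.get(q, float("inf"))
        if all(d_q <= dist.get(o, float("inf"))
               for o in data_points if o != p):
            result.append(p)
    return result
```

This costs one full shortest-path computation per data object, which is precisely the work the paper's pruning lemma is designed to avoid.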